This project is meant as a help for anyone who wants to explore U.S. wages, learn how to reason about data visualization, study distribution modeling, or build a complete web app inside a single HTML page using only vanilla JavaScript.
I use this document not just as a guide to the interface and settings, and not just as an explanation of the occupation tree structure. It is also a filterable walk-through of how the project was made. Depending on your preferences, it can reveal or hide details of the methods used to build CDF and PDF curves from quantiles, drawing ideas from statistics, math (calculus and geometry), data visualization practice and guidelines, and the supporting technologies: Python, HTML, CSS, and JavaScript.
At the center is the violin, a chart type usually used to summarize distributions with a kernel density estimate built from raw data. In this project, however, the violins do the opposite. They act as a data disaggregator, reconstructing an approximate distribution shape from a small set of quantiles. This proved a harder problem than it looked at first sight, which is exactly why it became a challenge, and why I stayed stubbornly determined to reach a solution. What you will see here is a walk through my own path of thinking along the way.
The project is an interactive page for exploring U.S. wages using “violin” shapes. Each violin stands for one occupation or one major group. Read it like this: wages run left to right, and the shape gets thicker where many people earn around that wage, and thinner where only a few do. It is the same idea as a crowd on a sidewalk: wide where it is packed, narrow where it is sparse. Technically, the violin is built from a density curve (PDF), which is just a smooth way to describe how concentrated the wages are at each level. In basic probability, that curve is scaled so its total area equals 1. Here, I multiply it by the number of workers, so the violin’s total area reflects how many people are in that occupation. Since the horizontal axis is reserved for wages and stays fixed, “more workers” shows up as a thicker violin, not as a longer one. After the shapes are built, they are placed vertically in a separate step using an optimization method that reduces overlap, keeps the layout balanced, and makes labels easier to read.
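The area convention described above can be sketched in a few lines of Python. The grid, density, and worker count below are toy placeholders, not project data:

```python
import numpy as np

def trapezoid(y, x):
    """Explicit trapezoidal integration (kept simple for portability)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def violin_half_height(wage_grid, pdf, workers):
    """Rescale a unit-area density so the violin's total area equals `workers`."""
    area = trapezoid(pdf, wage_grid)   # ~1 for a proper PDF
    return pdf / area * workers        # mirrored around the axis to form the violin

# Toy example: triangular density (units irrelevant here)
grid = np.linspace(0.0, 2.0, 201)
pdf = np.where(grid <= 1.0, grid, 2.0 - grid)   # integrates to 1
half = violin_half_height(grid, pdf, workers=50_000)
print(trapezoid(half, grid))                     # ≈ 50000
```

Because the wage axis is fixed, scaling the density this way can only change the thickness of the shape, never its horizontal extent.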
The main view supports two scales: a linear scale and an adaptive scale. The adaptive scale gives more screen space to wage ranges where the data is crowded, and less space where it is sparse. Instead of using a log scale or an inverse hyperbolic sine transform, it uses the data density itself to map coordinates. You can spot the adaptive scale through the tighter or looser tick spacing, changes in background shading, and the axis label size. Switch between the two scales by double clicking or double tapping the axis.
Hovering over (or tapping) an occupation’s violin highlights it and opens a tooltip. The tooltip shows a scaled-to-fit mini view of the distribution (violin, PDF, or CDF), together with the key quantiles ✧q10, ✧q25, ✦q50, ✧q75, and ✧q90. To hint at numerical uncertainty, the extrapolated tips fade out gradually.
A click or tap pins the tooltip. This switches it from a quick tooltip into a small investigation tool you can control. Once pinned, you can cycle its distribution view (violin, PDF, CDF) with a long press on mobile or a right click on desktop. The CDF view is especially useful when you want to judge, by eye, how well the reconstruction behaves between the quantile anchors.
In the pinned state, a draggable band estimates how many workers fall within a chosen wage interval. Clicking the info label cycles the band width from 500 to 50,000 dollars in multiples of 2 or 5. Depending on context, clicking the major category header either focuses that major category or returns to the main view. Double clicking the graph toggles the band between free movement (100 dollar resolution) and snapped movement, where it jumps in steps equal to the selected bandwidth.
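The band's worker estimate follows directly from the reconstructed CDF: the count inside [lo, hi] is the occupation's employment times F(hi) − F(lo). A minimal sketch, using a made-up uniform CDF in place of a reconstructed one:

```python
def workers_in_band(cdf, workers, lo, hi):
    """Estimated worker count inside the wage interval [lo, hi]."""
    return workers * (cdf(hi) - cdf(lo))

# Hypothetical occupation: 50,000 workers, wages uniform on $20k..$120k
def uniform_cdf(w):
    return min(max((w - 20_000) / 100_000, 0.0), 1.0)

print(workers_in_band(uniform_cdf, 50_000, 40_000, 60_000))  # 10000.0
```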
The main view works like a map: you can stay zoomed out for the big picture, or zoom in to explore one major category in detail. In the global view, even a thin slice of vertical space can contain dozens of detailed occupations, so major categories are given distinct colors and are visually separated from one another.
To zoom into a major category, double tap or double click any violin within that category. In focus mode, double clicking anywhere on the violin area (except the wage axis) restores the global view. For convenience in global mode, violins are clipped to the major envelope, but hover or tap interaction can still identify violins that are only partially visible.
A context menu provides quick access to common actions, including focusing (zooming) a major category, toggling between linear and adaptive scales, searching for occupations, showing or hiding labels, and adjusting color settings such as contrast, luminosity, intensity, and label colors.
This project aims for a smooth experience without turning the screen into a cockpit. Settings are kept to a minimum on purpose, and anything I consider nonessential starts hidden. Most global settings live in the context menu (long press on mobile, right click on desktop). Others are attached to the tooltip itself, using simple gestures like single tap, double tap, or long press on the tooltip elements.
By default, the wage axis uses a linear scale. Unlike nonlinear options such as logarithmic, square root, or inverse hyperbolic sine, a linear scale preserves proportional distances, so differences between wages can be compared directly by eye. That is why it is usually the preferred choice in data visualization.
When the data becomes too dense to read comfortably, nonlinear scales can create breathing room, but they do it by bending positions and changing how sizes feel. In this project the wage range is truncated at $240,000, so a linear scale is often enough. Still, for categories where most values are crowded toward the lower end, I also provide an adaptive scale.
The adaptive scale is built from the global distribution, but its deformation is tamed so that adjacent tick gaps follow a monotone compression ramp: they start near a baseline spacing, shrink smoothly through the dense region, and then clamp to a constant spacing in the tail. Like any nonlinear scale, it introduces distortion, so the design makes its presence hard to miss: tick spacing changes, background shading shifts, and labels adjust to signal that the axis is warped. The transition is animated in JavaScript to show what is moving where when you switch between linear and adaptive (and back). You can set the scale mode from the context menu, and it defaults to linear. You can also toggle it by double tapping the wage axis area.
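One way to build such a density-driven warp, sketched below, is to clamp the local magnification between a floor and a cap and then integrate. This is an illustration of the idea, not the project's exact construction; the `floor` and `cap` values are hypothetical:

```python
import numpy as np

def adaptive_scale(wages, weights, grid, floor=0.2, cap=3.0):
    """Monotone density-driven axis warp (a sketch, not the app's exact method).

    Local magnification is proportional to data density, clamped between
    `floor` and `cap` times the uniform baseline, then integrated and
    normalized, so the mapping is strictly increasing and spans [0, 1].
    """
    hist, _ = np.histogram(wages, bins=grid, weights=weights, density=True)
    base = 1.0 / (grid[-1] - grid[0])               # uniform (linear) density
    mag = np.clip(hist / base, floor, cap)          # bounded magnification
    pos = np.concatenate([[0.0], np.cumsum(mag * np.diff(grid))])
    return pos / pos[-1]                            # normalized tick positions

# Example: 90% of the mass near $30k gets magnified, empty regions compress
wages = np.concatenate([np.full(900, 30_000.0), np.full(100, 200_000.0)])
grid = np.linspace(0.0, 240_000.0, 25)
pos = adaptive_scale(wages, np.ones(wages.size), grid)
```

Because the magnification never reaches zero, the mapping stays invertible, which is what makes the animated transition between linear and adaptive modes well defined.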
In textbook probability, the area under a density curve is 1. In this project, the violins are scaled by worker counts, so the area of a violin grows with employment. The side effect is that some categories, even when they contain lots of occupations, can become almost line thin at the global scale.
To deal with that, you can zoom in on any major category or detailed occupation by double tapping its violin or its label. To return to the global view, double click the violin drawing area (not the axis area, where double click switches between linear and adaptive scales). The context menu also detects the category under the pointer and names the zoom action explicitly, for example, “Zoom Sales …”.
I put significant effort into making the wage map readable. The violin placement algorithm includes an optimization component that tries to expose as many labels as possible without turning the view into clutter.
In focus view, you can choose what percentage of labels to show, which is mainly a way to fine-tune how many of the large occupations (those with high worker counts) stay visible. A “hide overlapping labels” option, enabled by default, helps keep the view readable. If you prefer labels that match category colors, there is also a “colored labels” option. It is hidden by default and starts disabled.
Exploration would not feel complete without a natural search feature. Search is always visible in the context menu. As you type the first characters of an occupation name, the interface adaptively hides labels that do not match and highlights the matching substring in those that do.
This behavior applies to detailed occupations. For major categories, a star indicates that the category contains at least one occupation whose name matches the pattern. Matching is case insensitive and Unicode aware.
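In Python terms, the matching behavior described above is what `str.casefold` provides: a Unicode-aware caseless comparison that goes beyond plain lowercasing. (The app itself implements this in JavaScript; this is an analogue, not the app's code.)

```python
def matches(name, pattern):
    """Case-insensitive, Unicode-aware substring match."""
    return pattern.casefold() in name.casefold()

print(matches("Registered Nurses", "nurse"))    # True
print(matches("Straße", "STRASSE"))             # True: 'ß' casefolds to 'ss'
print(matches("Chefs and Head Cooks", "nurse")) # False
```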
Pressing “Go” (Enter) navigates to the first match (ordered by number of workers), highlights it, and opens its tooltip. If the match is outside the current focus, the view automatically focuses the appropriate major category first. Clearing the search pattern restores the label visibility state that was active before the search.
As a nonessential feature, global luminosity, contrast, and color intensity can be adjusted. These options are hidden by default in the context menu.
The tooltip is the central visual element of the interface, providing details for a highlighted occupation or major category. A click or tap pins it, and in this pinned state several parts become interactive.
The titles can be used to expand or collapse a category, indicated by the “≺” (expand) and “≻” (collapse) symbols. The draggable band estimates how many workers fall within a wage interval, and shows the amounts inside and outside that interval. This estimate is derived from the reconstructed distribution, not from raw microdata.
Single tapping the displayed values cycles through a range of bandwidths. Double tapping the graphic toggles band snapping between free movement and steps equal to the selected bandwidth. A long press (right click on desktop) cycles through three distribution views, scaled to fit the tooltip: the violin (unscaled by workers), the PDF (the positive half of the violin), and the CDF, which shows cumulative density and helps assess the goodness of fit of the reconstruction.
For this project I used the most recent national OEWS release (2024) published on the Bureau of Labor Statistics page, from which I downloaded the XLSX file.
For this project I used the following fields.
OCC_TITLE: occupation name
OCC_CODE: occupation code, used for building the data hierarchy
O_GROUP: 'total', 'major', and 'detailed' items are used for filtering data
A_PCT10, A_PCT25, A_MEDIAN, A_PCT75, A_PCT90: yearly wage quantiles
TOT_EMP: registered workers for every occupation

Each OCC_CODE follows a “##-####” pattern. The overall total uses the code “00-0000”, major categories use “##-0000”, and detailed occupations use the full “##-####” form. A detailed occupation and its major category share the same first two digits. Detailed-occupation quantiles are used to reconstruct each violin shape. For a major category, the envelope is built by summing the height contributions of all its detailed occupations along the wage axis, and the overall envelope is built by doing the same over all detailed occupations.
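The level encoded by the O_GROUP column can also be derived directly from the code pattern; a small sketch:

```python
def occ_level(occ_code):
    """Classify an OEWS OCC_CODE of the form '##-####'."""
    if occ_code == "00-0000":
        return "total"
    if occ_code.endswith("-0000"):
        return "major"
    return "detailed"

def major_of(occ_code):
    """Major-category code that a detailed occupation rolls up into."""
    return occ_code[:2] + "-0000"

print(occ_level("00-0000"))  # total
print(occ_level("41-0000"))  # major
print(occ_level("41-2031"))  # detailed
print(major_of("41-2031"))   # 41-0000
```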
In data terms, the occupations form a tree: the overall total, major categories, and detailed occupations underneath the majors. The hierarchy drives grouping, colors, and focus behavior.
Majors act like chapters that can be expanded/collapsed or focused. Details are leaf nodes shown in dense views or on demand.
Once I committed to expanding five quantiles into a full probability distribution, the goal sounded almost simple: start from a small set of interior quantiles and reconstruct a smooth, monotone CDF via nonlinear interpolation and extrapolation, then obtain the corresponding PDF by numerical differentiation, in a way that stays visually stable and statistically robust.
In practice, it was anything but simple. I tested several parametric models often used for wage data, fitting the quantiles to families such as LN-Pareto, GB2, LN-GPD (C1 anchored), dPLN (Double Pareto Lognormal), Singh-Maddala (Burr XII), Dagum (Type I), and the Generalized Gamma (Stacy). I also tried more flexible, self-adjusting constructions that enforce monotonicity while still matching the five quantiles, including Hermite based interpolators, rational Ball curves, P-splines, and NURBS, with and without PDF tail tapering toward zero. Many of these worked well for large parts of the dataset, but none was dependable across every occupation, and that need for consistency is what kept the search going.
In the dataset, each occupation comes with only five quantiles that can be used to reconstruct a violin: A_PCT10, A_PCT25, A_MEDIAN, A_PCT75, A_PCT90 at probabilities p = [0.10, 0.25, 0.50, 0.75, 0.90]. But to draw a full violin shape, and especially to build the outer envelope contours for major categories, I also need endpoints. That means inferring q0 and q100 for p = [0.0, 1.0].
I tried several extrapolation ideas, from heuristic rules to statistical and numerical approaches: linear and log linear slope trends, geometric progression style rules, and Tukey fence bounds. They tended to fail for the same reason. With only five points, the outer gaps matter a lot, and many methods either become too sensitive to those gaps or depend on choices that are hard to justify.
The approach I settled on extends the quantile curve beyond the [q10, q90] range while keeping the next goal in view: the CDF must stay smooth and strictly increasing. The steps are: compute monotonicity-preserving Fritsch–Carlson slopes from the five known quantiles; extrapolate additional slopes toward p = 0 and p = 1 (with a log fallback to keep them positive); convert those endpoint slopes into provisional q0 and q100; then recompute slopes using all seven points and iterate a few times until the endpoints and slopes agree with each other. These endpoint estimates are then treated as provisional anchors for the next modeling step.
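A simplified sketch of that iteration follows. It keeps the Fritsch–Carlson slope formula but deliberately omits the log fallback and other safeguards described above, so it is an illustration of the loop, not the project's exact rule:

```python
import numpy as np

def fc_slopes(p, q):
    """Monotonicity-preserving (Fritsch-Carlson style) slopes dq/dp at the knots."""
    h = np.diff(p)
    d = np.diff(q) / h                    # secant slopes
    m = np.empty_like(q)
    m[0], m[-1] = d[0], d[-1]             # simple one-sided ends (sketch)
    for i in range(1, len(q) - 1):
        if d[i - 1] * d[i] <= 0:
            m[i] = 0.0
        else:                             # weighted harmonic mean keeps monotonicity
            w1, w2 = 2 * h[i] + h[i - 1], h[i] + 2 * h[i - 1]
            m[i] = (w1 + w2) / (w1 / d[i - 1] + w2 / d[i])
    return m

def extend_endpoints(p, q, iters=4):
    """Iterate provisional q0 and q100 until endpoints and slopes agree."""
    m = fc_slopes(p, q)
    q0 = q[0] - p[0] * m[0]
    q100 = q[-1] + (1.0 - p[-1]) * m[-1]
    p_ext = np.concatenate([[0.0], p, [1.0]])
    for _ in range(iters):
        q_ext = np.concatenate([[q0], q, [q100]])
        m = fc_slopes(p_ext, q_ext)
        q0 = q[0] - p[0] * m[1]           # project the inner knot's slope to p = 0
        q100 = q[-1] + (1.0 - p[-1]) * m[-2]
    return max(q0, 0.0), q100             # wages cannot be negative

# Toy quantiles at p = [0.10, 0.25, 0.50, 0.75, 0.90]
p = np.array([0.10, 0.25, 0.50, 0.75, 0.90])
q = np.array([25_000.0, 32_000.0, 45_000.0, 62_000.0, 85_000.0])
q0, q100 = extend_endpoints(p, q)
```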
Honestly, I expected this part to be straightforward. I thought the hard work was in the previous step, estimating q0 and q100. Once I had endpoints, it felt like the rest should follow naturally from monotone cubic Hermite interpolation with slope limiting (for example, in the Fritsch-Carlson style). It keeps the curve monotone, stays smooth, and avoids overshoot.
What I did not expect was what happens after differentiation. The CDF looked fine, but its derivative, the PDF, was not C1 smooth, and it often behaved strangely near q0 and q100. The tails in particular ended in ways that did not look right, which made me doubt the endpoint estimates themselves.
From there I started crossing methods off the list, one by one: univariate splines, generic splines, parametric splines, and NURBS (with and without tolerances). I tried different optimizers and different goals, like smoothness, goodness of fit at the five quantiles, unimodality constraints, and forcing the PDF to taper to zero at both ends.
At one point I nearly settled on a custom monotone transformation that takes an initially smooth raw PDF and reshapes it into a unimodal form with tails tapered to zero. I also looked closely at how that kind of transformation flows through derivatives into the final shape. That work turned into a useful by-product that I may publish separately, because it touches a broader question: monotone transformations that preserve a geometric notion of variation in the data. Still, even when the result looked good, the fit became too permissive, so I went back and tried classic wage families instead. That is when I tested models such as LN-Pareto, GB2, LN-GPD (C1 anchored), dPLN (Double Pareto Lognormal), Singh-Maddala (Burr XII), Dagum (Type I), and the generalized gamma (Stacy). The results were even less satisfying than the smoothing approaches. At that point it was clear this was the second challenge, and it was going to be more complex than the first.
After the earlier attempts failed, I moved to a more involved route: optimization. The five reported quantiles are treated as noisy observations. Instead of forcing the curve to pass through them exactly, I keep them inside a tolerance band, which is closer to how the data should be read in the first place.
The optimizer searches within a low-dimensional family of monotone spline or shape bases for x(p). For each candidate, it evaluates the curve on a dense p grid, then derives the implied PDF. There are safeguards to avoid near-vertical stretches in x(p), because those would turn into sharp spikes in the PDF.
The objective is a weighted combination of three things: (1) quantile residuals, handled as banded penalties, (2) smoothness of x(p), measured through curvature or second-difference energy, and (3) anti-needle terms that discourage very small dx/dp or extreme peak-to-tail ratios. In some cases it also allows a small drift in the effective p locations of the outer anchors, so the fit does not have to force an implausible amount of tail mass.
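The structure of that objective can be sketched as follows. The weights and the exact spike penalty are hypothetical placeholders, not the project's tuned values:

```python
import numpy as np

def objective(q_curve, p_grid, p_anchor, q_anchor, tol,
              w_fit=1.0, w_smooth=1e-2, w_needle=1e-3):
    """Sketch of a banded-residual + smoothness + anti-needle objective."""
    # (1) banded quantile residuals: free inside +/- tol, quadratic outside
    q_at = np.interp(p_anchor, p_grid, q_curve)
    excess = np.maximum(np.abs(q_at - q_anchor) - tol, 0.0)
    fit = np.sum(excess ** 2)

    # (2) smoothness: second-difference energy of x(p)
    smooth = np.sum(np.diff(q_curve, 2) ** 2)

    # (3) anti-needle: penalize tiny dx/dp, which becomes a PDF spike
    dxdp = np.diff(q_curve) / np.diff(p_grid)
    needle = np.sum(1.0 / (dxdp + 1e-12))

    return w_fit * fit + w_smooth * smooth + w_needle * needle

# Example: a smooth linear quantile curve vs. one with a flat "needle" stretch
p_grid = np.linspace(0.0, 1.0, 101)
p_anchor = np.array([0.10, 0.25, 0.50, 0.75, 0.90])
q_anchor = 100.0 * p_anchor
smooth_curve = 100.0 * p_grid
flat_curve = smooth_curve.copy()
flat_curve[50:53] = flat_curve[50]   # zero dx/dp here => PDF spike
```

Evaluated on these two candidates, the flat stretch is punished heavily by the anti-needle term even though it still hits every anchor within tolerance.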
Tail tapering happens after the core fit. Each tail is handled with a conservative post-process: first, a smooth Hermite-like tip is appended so the PDF decays to zero beyond the current endpoint. Then a monotone nonlinear horizontal stretch is applied, strong near the far tip and minimal near the anchor, to preserve the integrated tail mass. This keeps the interior CDF essentially unchanged, but turns blunt cutoffs into a controlled fade to zero. The result avoids wall-like endings and replaces them with a taper that looks credible and also signals uncertainty.
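The Hermite tip part of the taper can be sketched like this. Only the fade-to-zero segment is shown; the mass-preserving horizontal stretch described above is omitted, and the endpoint values are toy numbers:

```python
import numpy as np

def append_tail_tip(x_end, f_end, slope_end, length, n=50):
    """Append a cubic Hermite tip beyond x_end so the PDF fades to zero.

    The segment starts at (x_end, f_end) with entry slope `slope_end` and
    lands at (x_end + length, 0) with zero exit slope (a flat landing).
    """
    t = np.linspace(0.0, 1.0, n)
    h00 = 2 * t**3 - 3 * t**2 + 1    # Hermite basis for the start value
    h10 = t**3 - 2 * t**2 + t        # Hermite basis for the start slope
    # The end value and end slope are both zero, so their basis terms vanish.
    x = x_end + t * length
    f = f_end * h00 + slope_end * length * h10
    return x, np.maximum(f, 0.0)     # clamp: a PDF cannot go negative

# Toy tip: density 1e-5 at $90k, gentle negative slope, faded out over $30k
x, f = append_tail_tip(90_000.0, 1e-5, -1e-10, 30_000.0)
```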
Eventually, the detailed distributions are scaled by the number of registered workers and aggregated into major categories, and then into the overall “All Occupations” envelope. From that point on, the “only” remaining challenge was to arrange all these nested violins on a flat surface. I quickly learned that this was a third challenge: different in nature, outside the modeling zone, but still intriguing and difficult.
Once my geometry was ready, I had to decide how to place the violins. At the time, this felt like the “only” remaining challenge. That was a few months ago, when I first saw a visualization that encoded the same data with bubbles: occupations were positioned by their median wage, and bubble size represented the number of workers. It used a D3 beeswarm layout, basically a d3-force simulation that prevents overlaps and keeps the swarm compact.
I am a fan of beeswarms, and I have authored statistically driven density dot plots and density bars, but I am not a fan of D3 beeswarms. Not because they are poorly designed or unattractive. They are a beautiful piece of code by Mike Bostock. The issue is that they do not guarantee, or even try, to reconstruct the underlying distribution. They are physics driven arrangements, not statistically driven ones.
In my view, collapsing an entire occupation into a single median dot is the kind of shortcut the Grammar of Graphics makes easy to justify. It may not look like a major leap from a numerical summary, but it invites the wrong reading on both axes. On the wage axis, the real distribution spans far beyond the bubble. On the density axis, the swarm suggests structure it never actually reconstructs. This reminded me of something GoG promoters often overlook: the Grammar of Graphics is, at its core, a synthetic programming framework built for flexibility, not a blank check for arbitrary graphical combinations of numeric variables. Even its author acknowledges this in the book:
"This system (The Grammar of Graphics) is capable of producing some hideous graphics. There is nothing in its design to prevent its misuse."
Leland Wilkinson
Let’s be clear: once I committed to using the actual violin geometry, there was no realistic way to get a perfectly non-overlapping arrangement, unless the violins happened to fit together like a jigsaw I did not know existed.
So I accepted the constraint and designed a placement algorithm with a clear set of priorities, in this order: minimize overlap, cover the major envelopes well, keep the large shapes visible, and make labels readable. The key point is that this is not a one-off trick that works only because this dataset is “nice”. It is a general optimization idea, and it adapted well across all 22 major groups and the overall “All Occupations” view.
After countless iterations, tuning, and mixing ideas, the final layout ended up better than I expected. To control visibility as much as possible, I also leaned on drawing techniques that matter when the view gets dense: grouping, size-dependent transparency, careful z-ordering, clipping paths, and subtle edging that varies with size. The result is the violin layout you are exploring right now.
This project was slow work, and it truly looked like overthinking while it happened. Part of that is my own cognitive profile. “Nearly correct” does not calm me down. It usually triggers more determination than relief, because for me “good enough” often reads as “accidentally good,” not as “close to the actual solution.” The cases that worry me most are the ones that succeed often enough to look general, while still hiding a flaw in a narrow family of data. So I keep pushing until the method holds for those cases too, not just for the easy majority.
For me, when a solution holds entirely, and still holds under extreme test cases that were not even part of the dataset, I stop exploring. I delete variants, keep one path, and keep the dirt in the workshop. The app might look simple because the viewer sees the product in the window, not the cleaning process behind it.
Many wage charts treat pay like a scoreboard: one number, maybe two, and we call it a day. This project is my personal pushback. Wages are not a single fact. They are a spread, mainly a spread. For anyone who cares about jobs, budgets, hiring, negotiating, or simply why two people with the same title can live in different worlds, seeing the spread is the point.
This is not about being clever with graphics. It is about being honest to the numerical shape. For me, the goal is a view where the data stays visible, interaction helps the reader, and every choice in math, layout, styling, and UI earns its place by making interpretation clearer without changing meaning.
The median is useful. It is also a great way to hide important differences. Two occupations can share the same median wage while one has a tight cluster and the other has a long tail that stretches into a very different income life. If you have ever heard “the average salary is…” and immediately wondered “for whom?”, you already understand the motivation. That is why the violin is the core of the display, not a dot with decoration. Once you decide to draw a full wage shape from just a few anchors, you are making a promise. The cumulative curve (CDF) has to climb smoothly from 0 to 1 without stepping backward. The density curve (PDF) has to stay at or above zero, and it should not grow needles or cliffs just because the interpolation got ambitious. And the violin, which is just that density mirrored, should look like a plausible spread of wages, not like a machine glitch.
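Those promises translate directly into checks. The sketch below shows the kind of validation involved; the spike threshold is an arbitrary illustration, not the project's tuned value:

```python
import numpy as np

def check_reconstruction(x, cdf, pdf, spike_ratio=50.0):
    """Sanity checks for a reconstructed distribution sampled on grid x."""
    assert np.all(np.diff(cdf) >= 0.0), "CDF steps backward"
    assert abs(cdf[0]) < 1e-9 and abs(cdf[-1] - 1.0) < 1e-9, "CDF must span 0..1"
    assert np.all(pdf >= 0.0), "negative density"
    typical = np.median(pdf[pdf > 0])
    assert pdf.max() <= spike_ratio * typical, "needle-like spike in the PDF"
    return True

# Toy check: a triangular density and its cumulative integral
x = np.linspace(0.0, 2.0, 201)
pdf = np.where(x <= 1.0, x, 2.0 - x)
cdf = np.concatenate([[0.0], np.cumsum((pdf[1:] + pdf[:-1]) * np.diff(x) / 2.0)])
print(check_reconstruction(x, cdf, pdf))  # True
```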
A lot of visualization methods look smooth for the same reason some stories look neat: they edit out the awkward parts. I tried not to do that. When I smooth or interpolate, it is not to make the picture prettier. It is to make it stable, readable, and constrained in the right ways. If a method can overshoot, break monotonicity, or add little wiggles that suggest extra structure, I treat it as a bug, not a feature. The rule is simple: do not manufacture information. If I compress, warp, or transform, it has to be explicit and reversible, so the axis is still something you can trust.
The placement problem is where “a chart” turns into “a system”. Violins are wide, irregular shapes. They do not behave like points. Once I committed to real geometry, I needed a layout method that accepts that constraint instead of pretending it is not there. So I wrote a placement algorithm that treats legibility as a first class goal. Its priorities are, in order: minimize overlap, cover the major envelopes well, keep the larger and more consequential shapes visible, and make labels readable. That order is not just technical. It is a statement about what a viewer needs to keep their bearings.
The rendering is built to support scanning. Major groups are separated through color and structure so the overview reads like an organized field, not a pile of marks. Clipping keeps the global view calm, but identification still works even when items are partly clipped, because hiding geometry should not mean hiding access. Edges get as much care as fills. Subtle strokes, thickness variation, and transparency are used to keep dense regions readable. Small shapes stay present without shouting. Large shapes carry weight without taking over the page.
The UI is built around one idea: exploration should be easy enough that you actually do it. The global view is the index. Focus mode is the reading lens. A thin slice of the global scale can contain dozens of occupations, and interaction is what makes that density usable. A double tap or click focuses a major category, and getting back is equally direct. Tooltips are not decoration. They are local explanations: compact, consistent, and designed to keep the same meaning while changing the level of detail. You can move from “where am I?” to “what does this shape mean?” without losing context.
The project is not about making wages look clever. It is about making the distribution usable: something you can read, compare, and navigate, without collapsing it into a single number, a slogan, or a pretty illustration. And yet, once the distribution is allowed to show itself, the result can be genuinely beautiful, not because of styling tricks, but because the structure was always there, hidden under a flat, inexpressive net.
I started this work aiming for a static visualization, basically an alternative to a beeswarm. It gradually turned into an interactive layout, and that ended up being the better direction. Working under the constraint of a single self-contained file that runs anywhere HTML and JavaScript can render pushed the project in a few unexpected ways. The first versions were close to 30 MB. Through geometric optimization I brought it down to about 4 MB without removing the core information. The transitions are not the smoothest you will ever see, but they remain acceptable even on my older mobile devices. They are not there for artistic effect. They exist to keep a clear link between visual states, a bridge that helps the viewer track what changed. The work naturally split into three parts: reconstructing the violins, arranging the layout, and building the graphical dynamics and interaction. I used Python for the first two and for generating the primary graphical entities. I used HTML, CSS, and vanilla JavaScript for the interactive layer, with no external libraries.
Python, along with R, has basically become the common language of scientific data work. Coming from more than 30 years of C++, the hard part was not the syntax. It was the mindset shift. Once you start thinking in arrays, vectorization, and “let the data flow through operations,” a lot of your old habits stop being useful, and some become actively unhelpful.
Python is not a speed first language, and it is not built to make programmers feel virtuous. Code can get messy. Many of the low level memory and performance tricks that feel natural in C++ are either hidden or just not available in the same direct way. But the ecosystem is hard to argue with. So much of the heavy lifting is already implemented, compiled, and tuned, which means you can do serious numerical work without spending half your time wrestling the toolchain.
And then there is the current wave of development AI. It is not a replacement for clear thinking. If you do not already know how to program and validate results, it can confidently lead you into nonsense. But if you know what you are trying to build and you have good checks, it can be genuinely useful, like a fast assistant who still needs supervision.
The violin geometry and the layout are built in Python, exported as SVG, and then embedded into a single HTML file where every element can be manipulated interactively. CSS handles the look. JavaScript drives the behavior.
One early lesson was that the DOM is a terrible place to keep asking the same questions. So at startup, the JavaScript reads the SVG metadata and builds an internal, stateful model of the scene. From then on, interaction does not depend on repeatedly querying the DOM for data. The code can update styles, visibility, and hierarchy with minimal DOM churn, which matters when there are hundreds or thousands of paths on screen.
Coordinate handling is the other big piece. The view supports both linear and adaptive wage scales, and JavaScript applies the change as a reversible warp, not as a second separate rendering. Paths are transformed in place, so the same visual entities stay the same entities across modes. That makes transitions smoother and interaction more stable, because hover and focus always refer to the same underlying object, not to a regenerated copy.
Interaction itself is implemented as a small state machine. Pointer movement identifies what is under the cursor, then updates highlights and tooltips without rewriting the whole scene. The tooltip is treated as a primary reading surface. It can be pinned, it supports multiple internal views of the same distribution, and it responds to gestures and context actions. Its contents are not static labels stuck onto the SVG. They are computed and updated live from stored distribution data, including tick generation, bandwidth changes, and the mini plot rendering.
Focus mode is another place where JavaScript does real work. When a major category is focused, the code re-parents the relevant SVG groups into a dedicated wrapper, applies the clipping rules for that mode, and rescales the geometry to fit the plot box. It is done in a way that keeps alignment with the axes consistent, and it stays compatible with later mode changes like switching between linear and adaptive scales.
A lot of effort also goes into robustness, the unglamorous part that decides whether a tool survives contact with the real world. The code handles resizing, device differences, and browser zoom behavior so labels and interaction remain usable. It includes safe bounding box measurement and defensive geometry helpers, because in SVG the difference between “works” and “works everywhere” often lives in edge cases like fonts, transforms, and fractional pixels.
Finally, the JavaScript layer is deliberately dependency free. No external libraries makes the file portable and stable, but it also means the usual conveniences are replaced with custom, purpose built utilities: path parsing and rewriting, tick construction, gesture handling, caching, and small performance optimizations to keep the experience responsive on older devices.
Performance and stability were not afterthoughts here, because the whole design depends on responsiveness. A visualization like this only works if it stays predictable under real use: lots of shapes on screen, constant pointer movement, browser zoom, device rotation, and scale changes. If interaction stutters or geometry drifts, the reader stops trusting what they see.
The work is split on purpose. The expensive part happens offline in Python: reconstructing each violin from sparse quantiles under strict constraints, generating clean geometry, and solving the layout as an optimization problem. Those steps are heavy by nature, but they run once, with full numerical control and room for validation. The output is not just a picture. It is a prepared scene: paths, hierarchy, positions, and the metadata needed for interaction.
JavaScript plays a different role. It does not rebuild distributions or re-solve the layout. It treats the exported SVG as a precomputed dataset and focuses on interface work: what to show, how to style it, and how to move between visual states. Even the adaptive scale is handled as a mapping computed from prepared anchors and applied as a reversible warp. Switching modes is a geometry transform, not a recomputation. That keeps the underlying objects stable across states, which makes hover, focus, and tooltip logic reliable.
This division keeps runtime cost bounded. Most interactions are incremental updates: toggling clipped versus unclipped views, updating a highlight overlay, rewriting only the paths that need warping, and updating tooltip contents. The code is structured to avoid expensive full redraw behavior and to minimize DOM work, because in a dense SVG that is usually the real bottleneck. The result is not perfect smoothness, but it is consistent behavior across devices, including older mobile hardware, which matters more for a reading tool than chasing ideal animation.
Short definitions for terms used throughout the project.
Function: y = f(x), from input x to output y.
First derivative: f'(x), the local rate of change (slope) of a function.
Second derivative: f''(x), describes how the slope changes, used as a proxy for curvature.
Curvature: for y = f(x), it relates to f'' and f'.
Parametric curve: C(x(t), y(t)) with a parameter t, useful when y is not a simple function f(x).
CDF: F(x) = P(X <= x), monotone nondecreasing in x.
Quantile function: Q(p) = F^{-1}(p), maps a probability p to a value (wage).
PDF: f(x) = dF/dx when it exists; integrates to 1 over the domain.
CDF from PDF: F(x) = ∫_{-∞}^{x} f(t) dt.
PDF from CDF: f(x) = F'(x) in regions where F is differentiable.